AIBase

AI News
Tencent Hunyuan Releases New Theory on Floating Point Quantization Training, Revealing the Limits of Large Model Training

As large language models (LLMs) develop rapidly, their training and inference costs have become a growing focus of research and deployment. The Tencent Hunyuan team recently released a study on the scaling laws of low-bit floating point quantization training, i.e., the principles governing how model performance scales when weights and activations are trained at reduced floating point precision. The core question the research explores is how far precision can be lowered to cut computational and storage costs without sacrificing model performance.
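The technique at issue is training with low-bit floating point formats such as FP8, where each value keeps only a few exponent and mantissa bits. As a rough illustration only, and not the Hunyuan team's method, the following Python sketch simulates round-to-nearest quantization of float32 values into a hypothetical low-bit format with configurable exponent and mantissa widths:

```python
import numpy as np

def quantize_fp(x, exp_bits=4, man_bits=3):
    """Round float32 values to a hypothetical low-bit float format.

    Illustrative only: an IEEE-like layout (e.g. E4M3 for FP8) with
    round-to-nearest, saturation at the max normal value, and no
    special handling of subnormals, inf, or NaN.
    """
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1                  # e.g. 7 for E4M3
    # Exponent of each value's leading binary digit (0 maps harmlessly).
    exp = np.floor(np.log2(np.abs(x) + 1e-45))
    exp = np.clip(exp, 1 - bias, bias)              # representable exponents
    scale = 2.0 ** (exp - man_bits)                 # mantissa grid spacing
    q = np.round(x / scale) * scale                 # drop excess mantissa bits
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias  # e.g. 240 for E4M3
    return np.clip(q, -max_val, max_val)

w = np.random.randn(8).astype(np.float32)
print(np.abs(w - quantize_fp(w)).max())             # quantization error
```

In a quantization-aware training setup, rounding of this kind is typically applied in the forward pass with a straight-through estimator for gradients; the scaling-law question is how the loss degrades as exp_bits and man_bits shrink.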

Models

ERNIE 4.5 Turbo (Baidu)

Input: $0.8 / M tokens
Output: $3.2 / M tokens
Context length: 128K
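Given the per-million-token prices above, the cost of a single request is simple arithmetic. A minimal sketch, with made-up token counts for illustration:

```python
# Cost of one request at the listed ERNIE 4.5 Turbo rates.
INPUT_USD_PER_M = 0.8    # $ per million input tokens
OUTPUT_USD_PER_M = 3.2   # $ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# A 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.4f}")  # $0.0032
```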
